Improve the performance of creating FairSchedulingAlgoContext#4807
JamesMurkin wants to merge 9 commits into master
Conversation
When creating the fairshare scheduling context, we loop through all jobs of the jobdb and then filter out the ones relevant to the current pool. We then use these jobs to calculate relevant info about the pool:
- Which jobs are on which nodes
- The demand of each queue (and subsequently the fairshare/adjusted fairshare)

This is simple but quite inefficient if you have a lot (millions) of jobs in the system and many pools (most of which only a small fraction of jobs interact with). As a result, most of the scheduling time can be taken up by this data shuffling.

Now, instead of loading all jobs, we'll load only relevant jobs:
- All leased jobs (needed to calculate which jobs are on which nodes)
  - Possibly this could be further improved by storing them by node, but for now this is simple
- All queued jobs against pools we are processing (current pool + away pools)

This should have a significant impact, especially in the case where we have one pool with most jobs queued against it and many smaller pools.

There are larger reworks we could do here to make it even more efficient, but for now this should give us most of the benefit for a minor change. An example would be to calculate demand for each pool plus jobs on each node upfront, and to incrementally update this as pools are processed rather than recalculating it entirely.

Signed-off-by: JamesMurkin <jamesmurkin@hotmail.com>
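The targeted loading strategy described above can be sketched as follows. This is a minimal illustration, not Armada's actual code: the `Job` struct, the `JobDb` accessors, and the pool names are hypothetical stand-ins that mirror the shape of the change.

```go
package main

import "fmt"

// Job is a hypothetical stand-in for Armada's job type.
type Job struct {
	Id     string
	Pool   string
	Leased bool
}

// JobDb is a hypothetical in-memory stand-in for the jobdb.
type JobDb struct{ jobs []Job }

// GetAllLeasedJobs returns every leased job, needed to work out
// which jobs are on which nodes.
func (db *JobDb) GetAllLeasedJobs() []Job {
	var out []Job
	for _, j := range db.jobs {
		if j.Leased {
			out = append(out, j)
		}
	}
	return out
}

// GetQueuedJobsByPool returns queued jobs for a single pool, so jobs
// belonging to unrelated pools are never touched.
func (db *JobDb) GetQueuedJobsByPool(pool string) []Job {
	var out []Job
	for _, j := range db.jobs {
		if !j.Leased && j.Pool == pool {
			out = append(out, j)
		}
	}
	return out
}

func main() {
	db := &JobDb{jobs: []Job{
		{Id: "a", Pool: "cpu", Leased: true},
		{Id: "b", Pool: "cpu"},
		{Id: "c", Pool: "gpu"},
		{Id: "d", Pool: "unrelated"},
	}}

	// Load only what this scheduling round needs: leased jobs plus
	// queued jobs for the current pool and its away pools.
	relevant := db.GetAllLeasedJobs()
	for _, pool := range []string{"cpu", "gpu"} {
		relevant = append(relevant, db.GetQueuedJobsByPool(pool)...)
	}
	// Queued job "d" in the unrelated pool is never loaded.
	fmt.Println(len(relevant)) // 3
}
```

The per-pool queries presuppose the new jobdb indexes mentioned in the review summary; without an index, each `GetQueuedJobsByPool` call would itself be a full scan.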
Greptile Summary: This PR significantly improves the performance of creating `FairSchedulingAlgoContext`.
Confidence Score: 5/5. Safe to merge — the optimization is logically equivalent to the previous full-scan approach, previously flagged bugs are fixed, and the new indexes are well-tested. All remaining findings are P2, e.g. a missing capacity hint. No files require special attention.
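The "missing capacity hint" finding refers to a common Go micro-optimization: when the final length of a slice is known up front (here, leased jobs plus queued jobs), preallocating its capacity avoids repeated growth of the backing array. A generic sketch, not the PR's actual code:

```go
package main

import "fmt"

func main() {
	leased := []string{"a", "b"}
	queued := []string{"c", "d", "e"}

	// Capacity hint: one allocation up front instead of append
	// growing the backing array as elements arrive.
	all := make([]string, 0, len(leased)+len(queued))
	all = append(all, leased...)
	all = append(all, queued...)

	fmt.Println(len(all), cap(all)) // 5 5
}
```

Without the hint (`var all []string`), the same appends would still be correct, just at the cost of extra allocations and copies.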
Sequence Diagram

```mermaid
sequenceDiagram
    participant SA as FairSchedulingAlgo
    participant TXN as JobDb Txn
    participant CSI as calculateJobSchedulingInfo
    SA->>TXN: GetAllLeasedJobs()
    TXN-->>SA: leased []Job
    SA->>TXN: GetAllTerminalJobs()
    TXN-->>SA: terminal []Job
    loop for each pool in allPools
        SA->>TXN: GetQueuedJobsByPool(pool)
        TXN-->>SA: queued []Job
    end
    SA->>SA: UniqueBy(allJobs, job.Id())
    SA->>CSI: calculateJobSchedulingInfo(allJobs, ...)
    CSI-->>SA: jobSchedulingInfo (demand, allocation, shortJobPenalty, jobsByPool)
    SA->>SA: build nodeDb from leasedJobs (jobsByPool) using currentPoolJobs + otherPoolsJobs
    SA-->>SA: FairSchedulingAlgoContext
```
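The `UniqueBy(allJobs, job.Id())` step in the diagram deduplicates the combined slice before `calculateJobSchedulingInfo`, since a job returned by `GetAllLeasedJobs` could also surface from one of the per-pool queries. A self-contained sketch of such a helper — the generic signature here is an assumption, not necessarily Armada's actual `UniqueBy`:

```go
package main

import "fmt"

// UniqueBy keeps the first occurrence of each key, dropping later
// duplicates while preserving order.
func UniqueBy[T any, K comparable](items []T, key func(T) K) []T {
	seen := make(map[K]struct{}, len(items))
	out := make([]T, 0, len(items))
	for _, it := range items {
		k := key(it)
		if _, ok := seen[k]; ok {
			continue // already collected under this key
		}
		seen[k] = struct{}{}
		out = append(out, it)
	}
	return out
}

type job struct{ id string }

func main() {
	// Job "a" appears twice: once from the leased set and once
	// from a per-pool queued-jobs query.
	all := []job{{"a"}, {"b"}, {"a"}, {"c"}}
	unique := UniqueBy(all, func(j job) string { return j.id })
	fmt.Println(len(unique)) // 3
}
```

Deduplicating once after collection keeps the per-source queries simple: each query can return its full result set without coordinating with the others.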
Reviews (8): Last reviewed commit: "Merge branch 'master' into improve_fsctx..."